Local convergence theorems for Newton's method from data at one point

Authors

Abstract


Similar articles

A New Semilocal Convergence Theorem for the Weierstrass Method from Data at One Point

In this paper we present a new semilocal convergence theorem from data at one point for the Weierstrass iterative method for the simultaneous computation of polynomial zeros. The main result generalizes and improves on all previous results in this area.
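The Weierstrass (Durand-Kerner) iteration referred to above corrects all zero approximations simultaneously: each approximation is shifted by the polynomial value divided by the product of its differences to the other current approximations. A minimal Python sketch of that iteration for a monic polynomial; the function name, starting points, and example polynomial are illustrative choices, not taken from the paper:

```python
import numpy as np

def weierstrass_step(coeffs, z):
    """One Weierstrass (Durand-Kerner) step for a monic polynomial.

    coeffs: coefficients [1, a_{n-1}, ..., a_0] of a monic polynomial.
    z:      current simultaneous approximations to all n zeros.
    """
    z_new = np.empty_like(z)
    for i in range(len(z)):
        # Product of differences to the other current approximations.
        denom = np.prod([z[i] - z[j] for j in range(len(z)) if j != i])
        # Weierstrass correction: z_i <- z_i - p(z_i) / prod_{j != i} (z_i - z_j).
        z_new[i] = z[i] - np.polyval(coeffs, z[i]) / denom
    return z_new

# Example: the three zeros of z^3 - 1 from rough complex starting points.
coeffs = [1.0, 0.0, 0.0, -1.0]
z = np.array([0.4 + 0.9j, -1.0 + 0.1j, 0.3 - 1.1j])
for _ in range(20):
    z = weierstrass_step(coeffs, z)
print(np.sort_complex(z))  # approximately the three cube roots of unity
```

Roughly speaking, a semilocal theorem "from data at one point" of the kind announced above gives conditions, checkable at the starting approximations alone, under which such an iteration is guaranteed to converge.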

Full text

An Efficient One-Point Extrapolation Method for Linear Convergence

For iteration sequences otherwise converging linearly, the proposed one-point extrapolation method attains a convergence rate and efficiency of 1.618. This is accomplished by retaining an estimate of the linear coefficient from the previous step and using the estimate to extrapolate. For linear convergence problems, the classical Aitken-Steffensen δ²-process has an efficiency of just √2, while...
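For context, the classical Aitken-Steffensen δ²-process mentioned above accelerates a linearly convergent fixed-point iteration by extrapolating from three consecutive iterates. A minimal Python sketch of that classical baseline (not of the one-point method proposed in the paper; the example fixed-point problem is an illustrative choice):

```python
import math

def aitken_delta2(x0, x1, x2):
    """Classical Aitken delta-squared extrapolation from three consecutive iterates.

    For a linearly convergent sequence x_{n+1} = g(x_n), the value
    x0 - (x1 - x0)**2 / (x2 - 2*x1 + x0) is usually much closer to the
    limit than x2 itself.
    """
    denom = x2 - 2.0 * x1 + x0
    if denom == 0.0:          # sequence has (numerically) converged already
        return x2
    return x0 - (x1 - x0) ** 2 / denom

# Example: accelerate the linearly convergent iteration x_{n+1} = cos(x_n)
# in the Steffensen style, restarting from each extrapolated value.
x = 1.0
for _ in range(5):
    x0, x1, x2 = x, math.cos(x), math.cos(math.cos(x))
    x = aitken_delta2(x0, x1, x2)
print(x)  # close to the fixed point 0.7390851332...
```

Each Steffensen-style step above costs two evaluations of g, so its efficiency index is 2^(1/2) = √2; the one-point method summarized above reaches 1.618 by reusing the coefficient estimate from the previous step.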

Full text

ON CONVERGENCE THEOREMS FOR FUZZY HENSTOCK INTEGRALS

The main purpose of this paper is to establish different types of convergence theorems for fuzzy Henstock integrable functions, introduced by Wu and Gong. In fact, we prove a fuzzy uniform convergence theorem, a convergence theorem for fuzzy uniform Henstock integrable functions, and a fuzzy monotone convergence theorem. Finally, a necessary and sufficient condition under which th...

Full text

Approximate fixed point theorems for Geraghty-contractions

The purpose of this paper is to obtain necessary and sufficient conditions for the existence of approximate fixed points of Geraghty contractions. In this paper, definitions of approximate α-pair fixed points for two maps Tα, Sα and their diameters are given in a metric space.
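For background, the Geraghty contraction condition referred to above is commonly stated as follows; this is the standard definition, recalled here rather than quoted from the paper, and the ε-approximate fixed point notation is an illustrative choice:

```latex
% T : X -> X on a metric space (X, d) is a Geraghty contraction if there is a
% function beta : [0, \infty) -> [0, 1) such that beta(t_n) -> 1 forces t_n -> 0,
% and the contraction inequality below holds for all x, y in X.
\[
  d(Tx, Ty) \le \beta\bigl(d(x, y)\bigr)\, d(x, y).
\]
% An epsilon-approximate fixed point of T is any point x that T moves by at
% most epsilon:
\[
  d(x, Tx) \le \varepsilon .
\]
```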

Full text

An Improved Gauss-Newton Method Based Back-propagation Algorithm for Fast Convergence

The present work deals with an improved back-propagation algorithm based on the Gauss-Newton numerical optimization method for fast convergence. The steepest descent method is used for the back-propagation. The algorithm is tested on various datasets and compared with the steepest descent back-propagation algorithm. In the system, optimization is carried out using a multilayer neural network. The ...
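As background for the Gauss-Newton idea behind such training schemes: instead of following the raw gradient, each step solves a linearized least-squares problem in the residuals, (J^T J) dw = J^T r. A minimal damped Gauss-Newton sketch in NumPy on a small curve-fitting problem (the damping term and all names are illustrative assumptions, not details of the cited algorithm):

```python
import numpy as np

def gauss_newton(residual, jacobian, w, steps=20, damping=1e-8):
    """Damped Gauss-Newton iteration minimizing ||residual(w)||^2.

    residual: callable returning the residual vector r(w).
    jacobian: callable returning the Jacobian J(w) of r with respect to w.
    damping:  small multiple of the identity added to J^T J for stability.
    """
    for _ in range(steps):
        r = residual(w)
        J = jacobian(w)
        # Solve the normal equations (J^T J + damping * I) dw = J^T r.
        A = J.T @ J + damping * np.eye(len(w))
        dw = np.linalg.solve(A, J.T @ r)
        w = w - dw
    return w

# Example: fit y = a * exp(b * t) to noisy data by nonlinear least squares.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 30)
y = 2.0 * np.exp(-1.5 * t) + 0.01 * rng.standard_normal(t.size)
res = lambda w: w[0] * np.exp(w[1] * t) - y
jac = lambda w: np.column_stack([np.exp(w[1] * t), w[0] * t * np.exp(w[1] * t)])
print(gauss_newton(res, jac, np.array([1.0, -1.0])))  # roughly [2.0, -1.5]
```

In a back-propagation setting the residual vector would collect the per-sample output errors of the network and the Jacobian would be assembled from the usual back-propagated derivatives; the appeal is faster local convergence than plain steepest descent, at the cost of forming and solving the normal equations.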

Full text


Journal

Journal title: Applicationes Mathematicae

Year: 2002

ISSN: 1233-7234, 1730-6280

DOI: 10.4064/am29-4-7